
Add examples on how to profile a pipeline #13356

Merged
sayakpaul merged 44 commits into main from profiling-workflow
Apr 3, 2026

Conversation

@sayakpaul sayakpaul (Member) commented Mar 28, 2026

What does this PR do?

TL;DR: Adds a guide on how to profile a pipeline and fix issues like CPU overhead, CPU<->GPU syncs, etc.

Motivation

Since we provide first-class torch.compile support, it's important that our pipelines are set up for optimal success with it. This includes spotting any obvious issues that plague torch.compile performance -- CPU overhead, CPU<->GPU syncs, graph breaks, kernel launch delays, etc.

The best way to spot these bugs is to profile a pipeline, as profiling gives a granular measurement of where the GPU is spending time and whether it is doing so in an expected manner. We can then uncover any unexpected issues and eventually fix them.
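Graph breaks in particular can be surfaced without a full trace. As a quick illustrative sketch (not part of this PR), `torch._dynamo.explain` reports each break and its reason for a callable; the toy function below uses `.item()`, a classic sync/break source:

```python
import torch

def toy_fn(x):
    # .item() forces a GPU->CPU sync and, under torch.compile's default
    # settings, introduces a graph break at this point.
    if x.sum().item() > 0:
        return x * 2
    return x - 1

# The returned ExplainOutput summarizes graph count, graph breaks,
# and per-break reasons.
explanation = torch._dynamo.explain(toy_fn)(torch.randn(4))
print(explanation)
```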

Workflow

The README.md added in the PR has all the descriptions, but in summary:

  • take a popular pipeline like Flux/Flux2/QwenImage/Wan/LTX2
  • run the profile with 2 inference steps (see the sketch after this list)
  • load the trace in Perfetto
  • spot the potential suspects
  • pass those back to Claude along with the trace
    • ask it to attempt a fix
    • review the fix
    • compare the results
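For concreteness, here is a minimal sketch of the profiling run described above (the model id, prompt, and arguments are illustrative placeholders; the actual scripts live in the README added by this PR):

```python
import torch
from diffusers import DiffusionPipeline

# Illustrative pipeline; swap in whichever pipeline you want to profile.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Warmup run so one-time setup costs don't pollute the trace.
pipe("a photo of a cat", num_inference_steps=2)

with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA,
    ]
) as prof:
    pipe("a photo of a cat", num_inference_steps=2)

# Open this file at https://ui.perfetto.dev to inspect the trace.
prof.export_chrome_trace("trace.json")
```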

With this workflow, I was able to fix some issues in the Flux2 Klein pipeline and the Wan pipeline. All the changes look quite harmless to me.

Plan

Profiling pipelines is not only helpful for establishing a ceiling on performance; should this workflow prove useful, the community could also use it to help improve our pipelines.

Note to reviewers

Please review the changes in src/diffusers/*; you can skip straight to the "Afterwards" section of the README.md document.

The tutorial is currently available here. Some inline comments follow.

@sayakpaul sayakpaul requested review from DN6, dg845 and stevhliu March 28, 2026 04:41
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@stevhliu stevhliu (Member) left a comment

Super educational, I enjoyed reading this a lot!

  • maybe rename "Approach" to something like "How the tooling works" because it describes how it works rather than what the user should do
  • it seems like "Afterwards" may be more effective as a blog post, since it tells a story about issues 1 and 2 in the "What to look for" section
  • could be useful to add a link to this doc from our torch.compile docs


To inspect this: zoom into a single denoising step, select a CUDA kernel on the GPU row, and look at the corresponding CPU-side launch slice directly above it. The horizontal offset between them is the launch latency. In a healthy trace, CPU launch slices should be well ahead of GPU execution (the CPU is "feeding" the GPU faster than it can consume).
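As a programmatic complement to zooming around in Perfetto (assuming a `prof` object from a `torch.profiler.profile` run like the sketch earlier in this thread), an op-level summary can quantify where CPU and GPU time go:

```python
# Ops whose CPU time dwarfs their device time are candidates for launch
# overhead or sync problems; sorting by device time surfaces hot kernels.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```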

### Quick checklist per pipeline

very helpful!

@sayakpaul sayakpaul requested a review from stevhliu March 31, 2026 04:24
@dg845 dg845 (Collaborator) left a comment

Thanks for the PR! Left some questions/suggestions :).

sayakpaul and others added 9 commits April 3, 2026 08:47
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
@sayakpaul sayakpaul (Member, Author)

@dg845 thanks for the comments, very helpful. I think you also advocated for the end user, which is exactly what I was looking for in the reviews.

I have made the changes as requested. PTAL.

@sayakpaul sayakpaul requested a review from dg845 April 3, 2026 07:33
@dg845 dg845 (Collaborator) left a comment

LGTM, thanks!

sayakpaul and others added 2 commits April 3, 2026 10:18
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
@sayakpaul sayakpaul merged commit b114620 into main Apr 3, 2026
26 of 29 checks passed
@sayakpaul sayakpaul deleted the profiling-workflow branch April 3, 2026 14:13
varaprasadtarunkumar added a commit to varaprasadtarunkumar/diffusers that referenced this pull request Apr 3, 2026
…-step DtoH sync

When zero_cond_t=True, the modulate_index tensor was being recreated on
every transformer forward pass (once per denoising step) using:

    torch.tensor(list_comprehension, device=timestep.device, ...)

This runs a Python list comprehension and then torch.tensor() on the resulting list, which causes a cudaMemcpyAsync + cudaStreamSynchronize (DtoH sync) that forces the CPU to wait for all pending GPU kernels.

Since img_shapes (which fully determines modulate_index) is fixed for the
entire inference run, the resulting tensor is identical across all steps.
We cache it in _modulate_index_cache keyed by (img_shapes, device), so
the tensor is built only on the first step and reused thereafter.

This eliminates N-1 unnecessary torch.tensor() constructions and DtoH
syncs during inference (where N = num_inference_steps).

This issue was identified in the profiling guide added in huggingface#13356 and
referenced in huggingface#13401.

Follows the same caching pattern as _compute_video_freqs in QwenEmbedRope.
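A minimal sketch of that caching pattern (names simplified and the index construction hypothetical; the real computation lives in the transformer's forward pass):

```python
import torch

# Cache keyed by (img_shapes, device): the tensor is built from a Python
# list only once, avoiding the per-step torch.tensor() call and the device
# sync it triggers.
_modulate_index_cache = {}

def get_modulate_index(img_shapes, device):
    key = (tuple(img_shapes), str(device))
    if key not in _modulate_index_cache:
        # Placeholder index construction; in the actual model this is
        # derived from img_shapes.
        index = list(range(len(img_shapes)))
        _modulate_index_cache[key] = torch.tensor(index, device=device)
    return _modulate_index_cache[key]
```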

Labels

performance Anything related to performance improvements, profiling and benchmarking
